Web Survey Bibliography
Title Web surveys: Explaining and Reducing Unit Nonresponse, Item Nonresponse and Partial Nonresponse
Author Heerwegh, D.
Source Katholieke Universiteit Leuven, Doctorate
Year 2005
Access date 28.12.2005
Abstract Web surveys have recently become a popular survey data collection
method. The main reason for their popularity is a combination of two
benefits: high speed and low cost. Samples can be surveyed in shorter
periods of time and at a lower cost per additional observation with a
web survey than with an offline survey method. However, web surveys
have the disadvantage of not reaching every member of the general
population, because not everyone has Internet access. This is the
coverage problem. Also, web surveys commonly obtain relatively low
response rates, which could introduce nonresponse bias (the nonresponse
problem). The latter type of bias occurs when those surveyed
(respondents) differ from those who did not participate in the survey
(nonrespondents). Since population parameter estimates are based on the
respondents, estimation errors can arise in this situation.
This thesis focused on the nonresponse problem, the aim being an
evaluation of methodological strategies regarding their ability to
increase the response rate of a web survey (one important aspect of
data quality). Although different forms of nonresponse were
distinguished (starting vs. not starting the web survey, completing vs.
not completing it, and answering all vs. not answering all survey
questions), this general aim to increase the amount of usable data (for
substantive analyses) was retained throughout the entire thesis. Since
this requires a study design capable of capturing causality, the thesis
relied on experiments. Based on a number of available theories,
theoretical frameworks were developed to guide the research questions
and hypotheses regarding the different types of nonresponse. In total,
eight methodological strategies were tested in 13 experiments across
nine different samples, comprising 18,785 individuals. Since the
general population is not fully covered by the Internet, the
experiments were conducted within specific populations. Usually,
university students were targeted. This could limit the generality of
the findings of this thesis. The thesis acknowledges this limitation
and provides a discussion of it. It also explains how the obtained
results could be cautiously extrapolated to other specific populations.
In a first series of experiments, methodologies primarily aimed at
increasing the percentage of people starting the web survey (reflected
in the login rate) were tested. It was found that personalization of
the e-mail request significantly increases the web survey login rate.
The experiments also revealed that different types of information
regarding the survey length influence the percentage of survey
recipients that starts the web survey. Length statements that make the
survey sound shorter (or easier) lead to higher rates of respondents
starting the web survey. In an attempt to tailor the web survey request
to the particular characteristics of the sample units, the
effectiveness of a Frequently Asked Questions section was tested and,
in a longitudinal survey, the effectiveness of varying the e-mail
content was assessed. For a variety of possible reasons, these
methodologies were not able to influence the response to the web
surveys. A final result concerns the commonly used strategy to restrict
survey access to certain sample cases. It was found that an easier
access method does not necessarily increase the rate at which the web
survey is started.
In a second series of experiments, the aim was to influence the
percentage of respondents (i.e., sample units who started the web
survey) who complete the web survey (the completion rate). In
addition, it was also investigated how the item nonresponse rate (the
percentage of presented questions not answered) could be influenced.
Because it is sometimes possible to reach the end of a web survey
without answering a single question, simply increasing the completion
rate is not sufficient. Respondents should answer as many of the
presented survey questions as possible. It was found that radio buttons
are easier to use than drop-down boxes, which could lead to a higher
completion rate. Using a progress indicator did not increase the
completion rate, but it did decrease the item nonresponse rate.
Presenting reminder screens (prompts) during the survey when an
applicable question was not answered decreased the item nonresponse
rate and hence increased the completeness of the data.
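The three outcome rates discussed in the abstract (login rate, completion rate, and item nonresponse rate) can be made concrete with a short computation. This sketch is illustrative only: the function names are ours and the counts are hypothetical, not results from the thesis.

```python
# Illustrative only: counts below are hypothetical, not thesis results.
# Each function mirrors one of the rates defined in the abstract.

def login_rate(invited: int, started: int) -> float:
    """Share of invited sample members who started the web survey."""
    return started / invited

def completion_rate(started: int, completed: int) -> float:
    """Share of survey starters who reached the end of the survey."""
    return completed / started

def item_nonresponse_rate(presented: int, answered: int) -> float:
    """Share of presented questions that were left unanswered."""
    return (presented - answered) / presented

# Hypothetical sample: 1000 invitations, 400 logins, 320 completions;
# 9600 questions presented across all respondents, 9120 answered.
print(login_rate(1000, 400))              # 0.4
print(completion_rate(400, 320))          # 0.8
print(item_nonresponse_rate(9600, 9120))  # 0.05
```

Note that the completion rate is conditional on starting, which is why the thesis treats the two forms of nonresponse separately: a strategy can raise logins without raising completions, and vice versa.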
In conclusion, the thesis shows that the response to web surveys can be
increased by applying a set of methodological strategies, which can be
framed within the theoretical perspectives presented in the thesis. Of
course, further research is needed to perfect the strategies used here
and to develop new, effective ones. These possible avenues for future
research are discussed in the thesis.
Access/Direct link University doctorates
Year of publication 2005
Bibliographic type Thesis, diplomas
Web survey bibliography - 2005 (76)
- The ethics of research using electronic mail discussion groups; 2005; Kralik, D., Warren, J., Koch, T., Pignone, G., Price, K.
- The Analyses of Domestic Study about Internet Survey; 2005; Rui, L., Tie-ying, S.
- Controlling the Baseline Speed of Respondents: An Empirical Evaluation of Data Treatment Methods of...; 2005; Mayerl, J.
- Determinanten der Rücklaufquote in Online-Panels; 2005; Batanic, B., Moser, K.
- On the cost-efficiency of probability sampling based mail surveys with a Web response option; 2005; Werner, P.
- Expert workshop on mixed mode data collection in comparative social surveys; 2005; Roberts, C.
- The Effect Of A Simultaneous Mixed-Mode (Mail And Web) Survey On Respondent Characteristics And Survey...; 2005; Brennan, M.
- The total survey error approach. A guide to the new science of survey research; 2005; Weisberg, H. F.
- The professional respondent problem in online panel surveys today; 2005; Fulgoni, G.
- Satisficing behavior in online panelists; 2005; Downes-Le Guin, T.
- Reading behavior in the digital environment: Changes in reading behavior over the past ten years; 2005; Liu, Z.
- Rating versus comparative trade-off measures. Trending changes in political issues across time and predictive...; 2005; Thomas, R. K., Behnke, S., Johnson, Al., Sanders, M.
- Publication bias: Recognizing the problem, understanding its origins and scope, and preventing harm; 2005; Dickersin, K.
- Panel proliferation and quality concerns; 2005; Faasse, J.
- Gricean effects in self-administered survey. Ph.D. Dissertation; 2005; Yan, T.
- Drop-down boxes, radio buttons, or fill-in-the-blank? Web survey scale-type effects; 2005
- Does weighting for nonresponse increase the variance of survey means?; 2005; Little, R. J., Vartivarian, S.
- Big scale observations gathered with the help of client side paradata; 2005; Haraldsen, G., Kleven, O., Sundvoll, A.
- User Interface Design and Evaluation ; 2005; Stone, D., Jarrett, C., Woodroffe, M., Minocha, S.
- Adding Value to Data Through Improved Access. The Case for Web Portals; 2005; Baker, R. P.
- Multi-Mode Research and Data Linkage. Theoretical and Practical Advice; 2005; Terhanian, G.
- Architectural Design of a Survey Questionnaire and Respondent Data Repository. Practical Considerations...; 2005; Cookson, P., Sobell, J.
- Developing and validating a nursing website evaluation questionnaire; 2005; Tsai, S. - L., Chai, S.-K.
- Workaround: Site’s surveys beat pop-up blockers, yield responses; 2005; Arnold, C.
- The Story of Subject Naught: A Cautionary but Optimistic Tale of Internet Survey Research; 2005; Konstan, J. A., Ross, M. W., Rosser, B. R. S., Stanton, J. M., Edwards, W. M.
- Standards in Online Surveys. Sources for Professional Codes of Conduct, Ethical Guidelines and Quality...; 2005; Kaczmirek, L., Schulze, N.
- Computer adaptive testing; 2005; Gershon, R. C.
- Ego control and ego-resiliency: Generalization of self-report scales based on personality descriptions...; 2005; Block, J., Funder, D. C., Letzring, T. D.
- The Web experiment list: A Web service for the recruitment of participants and archiving of Internet...; 2005; Reips, U. -D., Lengler, R.
- Survey of substance use among high school students in Taipei: Web-based questionnaire versus paper-and...; 2005; Wang, Y. C., Lee, C. M., Lew-Ting, C. Y., Hsiao, C. K., Chen, W. J.
- Web Surveys. A Brief Guide on Usability and Implementation Issues; 2005; Kaczmirek, L.
- An assessment of measurement invariance between online and mail surveys ; 2005; Deutskens, E., de Ruyter, K., Wetzels, M.
- E-mail versus Web survey response rates among health education professionals; 2005; Kittleson, M. J., Brown, S. L.
- Toward An Open-Source Methodology: What We Can Learn From The Blogosphere; 2005; M.
- Aux Abonnes Absents: Liste Rouge Et Telephone Portable Dans Les Enquetes En Population Generale Sur...; 2005; Beck, F., Peretti-Watel, P.
- Web Versus Paper Questionnaires: A Design and Functionality Comparison; 2005; Jones, Ja., Fraser, C., Dowling, Z.
- Web Surveys and the new Disability Discrimination Act; 2005; Macer, T.
- Mixed-mode Surveys Using Mail and Web Questionnaires; 2005; Meckel, M., Baugh, P., Walters, D.
- Sampling procedure, questionnaire design, online implementation; 2005; Jackob, N., Arens, J., Zerback, T., Jowell, R., de Rouvray, C.
- Simple Approaches to Estimating the Variance of the Propensity Score Weighted Estimator Applied on Volunteer...; 2005; Isaksson, A., Lee, S.
- Alternative Modes for Health Surveillance Surveys: An Experiment with Web, Mail, and Telephone; 2005; Link, M. W., Mokdad, A.
- An Experimental Comparison Of Web And Telephone Surveys; 2005; Fricker, S., Galesic, M., Tourangeau, R., Yan, T.
- Organizational Virtual Communities: Exploring Motivations Behind Online Panel Participation; 2005; Daugherty, T., Lee, W.-N., Gangadharbatla, H., Kim, K., Outhavong, S.
- Promoting Uniform Question Understanding in Today's and Tomorrow's Surveys; 2005; Conrad, F. G., Schober, M. F.
- Is a Web survey as effective as a mail survey? A field experiment among computer users; 2005; Kiernan, N. E., Kiernan, M., Oyler, M. A., Gilles, C.
- The effect of personalization on response rates and data quality in web surveys; 2005; Heerwegh, D., Vanhove, T., Matthijs, K., Loosveldt, G.
- When Methodology Interferes With Substance; 2005; Schoen, H., Faas, T.
- Web-based and Mailed Questionnaires: A Comparison of Response Rates and Compliance; 2005; Baelter, K., Balter, O., Fondell, E., Trolle-Lagerros, Y.
- Bleeding Edge or Proven Technology? The Fact and the Fiction of Mobile Survey Computing; 2005; Cameron, M. R.